Image classification using CNNs in Keras

Data Description:

The dataset includes images of plant seedlings at various stages of growth. Each image has a filename that serves as its unique id. The dataset comprises 12 plant species. The goal of the project is to create a classifier capable of determining a plant's species from a photo.

Dataset

The data file names are:

  • images.npy
  • Labels.csv

Import all necessary modules and load the data

In [1]:
# Import necessary modules.

import cv2
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns

from tensorflow.keras import datasets, models, layers, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.utils import to_categorical
from google.colab.patches import cv2_imshow
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
In [ ]:
# Mount the drive
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [ ]:
#Defining the paths of the datasets
project_path = '/content/drive/My Drive/Colab Notebooks/UT_Austin/Computer_Vision/Project/'
labels_path = project_path + 'Labels.csv'
images_path = project_path + 'images.npy'
In [ ]:
# Loading the datasets
images = np.load(images_path)
labels = pd.read_csv(labels_path)
In [ ]:
# Shape of images dataset
print('Shape of images:', images.shape)

# Shape of labels dataset
print('Shape of labels:', labels.shape)
Shape of images: (4750, 128, 128, 3)
Shape of labels: (4750, 1)
  • The dataset consists of 4750 color images and their labels.
  • Images are in (R, G, B) format with a resolution of 128 x 128 pixels.
In [ ]:
# Displaying the number of images per plant species
labels.value_counts()
Out[ ]:
Label                    
Loose Silky-bent             654
Common Chickweed             611
Scentless Mayweed            516
Small-flowered Cranesbill    496
Fat Hen                      475
Charlock                     390
Sugar beet                   385
Cleavers                     287
Black-grass                  263
Shepherds Purse              231
Maize                        221
Common wheat                 221
dtype: int64
In [ ]:
# Displaying labels dataset
plt.figure(figsize=(15,5))
plt.xticks(rotation=45)
sns.countplot(x="Label", data=labels);
  • The dataset includes 12 different plant species.
  • It is imbalanced: the number of images per species varies widely.
  • The most represented species is Loose Silky-bent with 654 images, while the least represented are Common wheat and Maize with 221 images each.
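Given the imbalance noted above, one option (not applied in this notebook) is to weight the loss by class frequency. A minimal sketch of "balanced" class weights, using two illustrative species and counts taken from the value_counts() output; the dictionary itself is hypothetical:

```python
# Per-species image counts (subset of the value_counts() output, for illustration).
counts = {'Loose Silky-bent': 654, 'Common wheat': 221}
total = sum(counts.values())
n_classes = len(counts)

# "Balanced" weighting: total / (n_classes * count), so rarer classes weigh more.
class_weight = {label: total / (n_classes * n) for label, n in counts.items()}
print(class_weight)
```

Keyed by the integer class index instead of the species name, such a dictionary could be passed as `class_weight=` to `model.fit` so that errors on rare species cost more during training.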
In [3]:
# Indexes at which the plant species changes in the list of labels
indexes = [0]
for i in range(len(labels) - 1):
    if labels.iloc[i,0] != labels.iloc[i+1,0]:
        indexes.append(i+1)
indexes
Out[3]:
[0, 496, 971, 1202, 1423, 2034, 2424, 2711, 3227, 3612, 3833, 4096]
In [4]:
# Visualizing some of the images
for i in indexes:
    fig, axes = plt.subplots(1, 4,  figsize=(20, 5))
    fig.suptitle(labels.iloc[i,0], fontsize=20)
    for j in range(4):
        axes[j].imshow(images[i + j])
In [ ]:
# Pixel intensity histograms for above images
for i in indexes:
    fig, axes = plt.subplots(1, 4,  figsize=(20, 5))
    fig.suptitle(labels.iloc[i,0], fontsize=20)
    for j in range(4):
        sns.histplot(images[i + j].flatten(), ax=axes[j]);

As we can see from the pixel intensity histograms above, there is a lot of digital noise in the images.

Data pre-processing

Normalization

  • We should normalize our data, as neural networks train faster and more stably on inputs with a small, consistent range.
  • We can achieve this by dividing the pixel values by 255 (the maximum pixel intensity minus the minimum: 255 − 0).
  • Make sure the values are floats so that the division produces decimal values.
In [ ]:
# Displaying the range of pixel intensity in 8-bit color images
print('Maximum pixel intensity in images:', images.max())
print('Minimum pixel intensity in images:', images.min())
Maximum pixel intensity in images: 255
Minimum pixel intensity in images: 0
In [5]:
# Normalization in order to have pixel intensity values ranging between 0 and 1
X = images.astype('float32')
X /= 255

Gaussian blurring

Blurring helps remove noise from the images and reduces the risk of the model overfitting to it.

In [ ]:
# Blur the image with a kernel size of (5,5)
blur_1 = cv2.GaussianBlur(X[0], (5,5), 0)
# Blur the image with a kernel size of (15,15)
blur_2 = cv2.GaussianBlur(X[0], (15,15), 0)
print('Original Image:  \n')
plt.imshow(X[0])
plt.show()
print('\nFirst blurring: \n')
plt.imshow(blur_1)
plt.show()
print('\nSecond blurring: \n')
plt.imshow(blur_2)
plt.show()
Original Image:  

First blurring: 

Second blurring: 

A kernel size of 5 x 5 seems reasonable for removing the digital noise in the images while still preserving good definition.

In [6]:
# Applying the Gaussian blurring with 5 x 5 kernel size over the entire image dataset
for i in range(len(X)):
    X[i] = cv2.GaussianBlur(X[i], (5,5), 0)

Visualizing data after pre-processing

In [7]:
# Visualizing some of the images after pre-processing
for i in indexes:
    fig, axes = plt.subplots(1, 4,  figsize=(20, 5))
    fig.suptitle(labels.iloc[i,0], fontsize=20)
    for j in range(4):
        axes[j].imshow(X[i + j])
In [ ]:
# Pixel intensity histograms after pre-processing
for i in indexes:
    fig, axes = plt.subplots(1, 4,  figsize=(20, 5))
    fig.suptitle(labels.iloc[i,0], fontsize=20)
    for j in range(4):
        sns.histplot(X[i + j].flatten(), ax=axes[j]);

The distributions of pixel intensity show a smoother shape after blurring, indicating that the digital noise has been reduced.
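To see why blurring smooths these histograms, here is a NumPy-only sketch: a naive 5×5 box blur (standing in for the cv2.GaussianBlur used in this notebook) applied to synthetic noise, showing that pixel-to-pixel variation drops after filtering:

```python
import numpy as np

def box_blur(img, k=5):
    """Naive k x k mean filter with edge padding (stand-in for cv2.GaussianBlur)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
noisy = rng.random((128, 128))                    # synthetic noisy grayscale image
roughness = lambda a: np.var(np.diff(a, axis=0))  # pixel-to-pixel variation proxy
print(roughness(noisy), roughness(box_blur(noisy)))
```

The "roughness" of the blurred image is far lower, which is the same effect the smoother post-blur histograms reflect.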

Make data compatible

One hot encoding of labels

In [ ]:
# Creating a dictionary of labels
label_dict = {}
for idx, plant in enumerate(list(labels['Label'].unique())):
    label_dict[plant] = idx
print(label_dict)
{'Small-flowered Cranesbill': 0, 'Fat Hen': 1, 'Shepherds Purse': 2, 'Common wheat': 3, 'Common Chickweed': 4, 'Charlock': 5, 'Cleavers': 6, 'Scentless Mayweed': 7, 'Sugar beet': 8, 'Maize': 9, 'Black-grass': 10, 'Loose Silky-bent': 11}
In [ ]:
# Replacing text labels by numbers
y = labels['Label'].map(label_dict)
In [ ]:
# Convert labels to one-hot-vectors.
y = to_categorical(y)
In [ ]:
# Displaying shape of labels
y.shape
Out[ ]:
(4750, 12)
In [ ]:
y[0]
Out[ ]:
array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)

Train, validation and test sets

In [ ]:
# Splitting into training, validation and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=1)
In [ ]:
# Training, validation and test sets shape
print("X_train shape:", X_train.shape)
print("y_train shape:", y_train.shape)
print("Images in X_train:", X_train.shape[0])
print("Images in X_val:", X_val.shape[0])
print("Images in X_test:", X_test.shape[0])
X_train shape: (3325, 128, 128, 3)
y_train shape: (3325, 12)
Images in X_train: 3325
Images in X_val: 712
Images in X_test: 713

The input data is a 4-D tensor of shape [batch, height, width, channels] which is compatible with Keras models. The first dimension represents the image number, the second dimension the image height, the third the width and the fourth the (R,G,B) color channels.
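A minimal sketch of this NHWC ("channels_last") convention, assuming Keras's default image data format: a single image lacks the batch dimension and would need to be expanded before being fed to the model.

```python
import numpy as np

# Keras models expect a batch dimension; a single 128x128 RGB image must be
# expanded from (128, 128, 3) to (1, 128, 128, 3), e.g. before model.predict.
image = np.zeros((128, 128, 3), dtype=np.float32)
batch_of_one = np.expand_dims(image, axis=0)
print(batch_of_one.shape)
```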

Building the CNN model

  • Convolutional input layer, 32 feature maps of size 5×5 with ReLU activation.
  • Batch Normalization layer.
  • Max Pooling layer of size 2×2.
  • Dropout layer at 20%.

  • Convolutional layer, 64 feature maps of size 5×5 with ReLU activation.
  • Batch Normalization layer.
  • Max Pooling layer of size 2×2.
  • Dropout layer at 30%.

  • Convolutional layer, 64 feature maps of size 3×3 with ReLU activation.
  • Batch Normalization layer.
  • Max Pooling layer of size 2×2.
  • Dropout layer at 40%.

  • Convolutional layer, 64 feature maps of size 3×3 with ReLU activation.
  • Batch Normalization layer.
  • Max Pooling layer of size 2×2.
  • Dropout layer at 50%.

  • GlobalMaxPooling2D layer.
  • Fully connected layer with 256 units and ReLU activation.
  • Dropout layer at 50%.
  • Fully connected output layer with 12 units and softmax activation.
In [ ]:
# Set the CNN model

model = models.Sequential()
model.add(layers.Conv2D(32, (5, 5), padding='same', activation="relu", input_shape=X_train.shape[1:]))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.2))
model.add(layers.Conv2D(64, (5, 5), padding='same', activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.4))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Dropout(0.5))

model.add(layers.GlobalMaxPooling2D())
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(12, activation="softmax"))

model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_12 (Conv2D)           (None, 128, 128, 32)      2432      
_________________________________________________________________
batch_normalization_12 (Batc (None, 128, 128, 32)      128       
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 64, 64, 32)        0         
_________________________________________________________________
dropout_15 (Dropout)         (None, 64, 64, 32)        0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 64, 64, 64)        51264     
_________________________________________________________________
batch_normalization_13 (Batc (None, 64, 64, 64)        256       
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 32, 32, 64)        0         
_________________________________________________________________
dropout_16 (Dropout)         (None, 32, 32, 64)        0         
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 32, 32, 64)        36928     
_________________________________________________________________
batch_normalization_14 (Batc (None, 32, 32, 64)        256       
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 16, 16, 64)        0         
_________________________________________________________________
dropout_17 (Dropout)         (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 16, 16, 64)        36928     
_________________________________________________________________
batch_normalization_15 (Batc (None, 16, 16, 64)        256       
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 8, 8, 64)          0         
_________________________________________________________________
dropout_18 (Dropout)         (None, 8, 8, 64)          0         
_________________________________________________________________
global_max_pooling2d_3 (Glob (None, 64)                0         
_________________________________________________________________
dense_6 (Dense)              (None, 256)               16640     
_________________________________________________________________
dropout_19 (Dropout)         (None, 256)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 12)                3084      
=================================================================
Total params: 148,172
Trainable params: 147,724
Non-trainable params: 448
_________________________________________________________________
In [ ]:
# initiate Adam optimizer
opt = optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07)
In [ ]:
# Let's train the model using Adam
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

Fitting and evaluating the model

  • We fit the model for up to 500 epochs.
  • To prevent the model from overfitting, we use early stopping and model checkpoint callbacks to interrupt training and save the weights that perform best.
In [ ]:
# The EarlyStopping callback stops the training if val_loss has not improved
# by at least 0.001 for more than 50 consecutive epochs

early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=50)

# The ModelCheckpoint callback saves the weights whenever val_loss reaches a new low,
# keeping the best weights observed during training

model_checkpoint = ModelCheckpoint('checkpoint_{epoch:02d}_loss{val_loss:.4f}.h5',
                                   monitor='val_loss',
                                   verbose=1,
                                   save_best_only=True,
                                   save_weights_only=True,
                                   mode='auto')
In [ ]:
history = model.fit(X_train,
                    y_train,
                    batch_size=None,  # None falls back to the Keras default of 32
                    epochs=500,
                    validation_data=(X_val, y_val),
                    shuffle=True,
                    verbose=1,
                    callbacks=[early_stopping,model_checkpoint])

# plot training history
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.show()
Epoch 1/500
104/104 [==============================] - 4s 30ms/step - loss: 3.3887 - accuracy: 0.2006 - val_loss: 2.5056 - val_accuracy: 0.1419

Epoch 00001: val_loss improved from inf to 2.50557, saving model to checkpoint_01_loss2.5056.h5
Epoch 2/500
104/104 [==============================] - 3s 27ms/step - loss: 1.8872 - accuracy: 0.3377 - val_loss: 2.9779 - val_accuracy: 0.1419

Epoch 00002: val_loss did not improve from 2.50557
Epoch 3/500
104/104 [==============================] - 3s 27ms/step - loss: 1.6536 - accuracy: 0.4283 - val_loss: 2.8089 - val_accuracy: 0.1419

Epoch 00003: val_loss did not improve from 2.50557
Epoch 4/500
104/104 [==============================] - 3s 27ms/step - loss: 1.4860 - accuracy: 0.4776 - val_loss: 2.9187 - val_accuracy: 0.1517

Epoch 00004: val_loss did not improve from 2.50557
Epoch 5/500
104/104 [==============================] - 3s 27ms/step - loss: 1.3632 - accuracy: 0.5305 - val_loss: 2.0030 - val_accuracy: 0.2963

Epoch 00005: val_loss improved from 2.50557 to 2.00303, saving model to checkpoint_05_loss2.0030.h5
Epoch 6/500
104/104 [==============================] - 3s 27ms/step - loss: 1.2765 - accuracy: 0.5696 - val_loss: 2.2630 - val_accuracy: 0.2275

Epoch 00006: val_loss did not improve from 2.00303
Epoch 7/500
104/104 [==============================] - 3s 27ms/step - loss: 1.2620 - accuracy: 0.5684 - val_loss: 1.9605 - val_accuracy: 0.3329

Epoch 00007: val_loss improved from 2.00303 to 1.96052, saving model to checkpoint_07_loss1.9605.h5
Epoch 8/500
104/104 [==============================] - 3s 27ms/step - loss: 1.1968 - accuracy: 0.5943 - val_loss: 1.5141 - val_accuracy: 0.4916

Epoch 00008: val_loss improved from 1.96052 to 1.51412, saving model to checkpoint_08_loss1.5141.h5
Epoch 9/500
104/104 [==============================] - 3s 27ms/step - loss: 1.1207 - accuracy: 0.6186 - val_loss: 2.3731 - val_accuracy: 0.1854

Epoch 00009: val_loss did not improve from 1.51412
Epoch 10/500
104/104 [==============================] - 3s 27ms/step - loss: 1.0605 - accuracy: 0.6403 - val_loss: 1.3344 - val_accuracy: 0.5576

Epoch 00010: val_loss improved from 1.51412 to 1.33441, saving model to checkpoint_10_loss1.3344.h5
Epoch 11/500
104/104 [==============================] - 3s 27ms/step - loss: 1.0236 - accuracy: 0.6526 - val_loss: 1.8346 - val_accuracy: 0.3118

Epoch 00011: val_loss did not improve from 1.33441
Epoch 12/500
104/104 [==============================] - 3s 27ms/step - loss: 1.0041 - accuracy: 0.6544 - val_loss: 1.1890 - val_accuracy: 0.6362

Epoch 00012: val_loss improved from 1.33441 to 1.18902, saving model to checkpoint_12_loss1.1890.h5
Epoch 13/500
104/104 [==============================] - 3s 27ms/step - loss: 0.9804 - accuracy: 0.6731 - val_loss: 1.9421 - val_accuracy: 0.3301

Epoch 00013: val_loss did not improve from 1.18902
Epoch 14/500
104/104 [==============================] - 3s 27ms/step - loss: 0.9507 - accuracy: 0.6890 - val_loss: 1.4425 - val_accuracy: 0.4733

Epoch 00014: val_loss did not improve from 1.18902
Epoch 15/500
104/104 [==============================] - 3s 28ms/step - loss: 0.9106 - accuracy: 0.6851 - val_loss: 1.3994 - val_accuracy: 0.5183

Epoch 00015: val_loss did not improve from 1.18902
Epoch 16/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8856 - accuracy: 0.6974 - val_loss: 0.9494 - val_accuracy: 0.7233

Epoch 00016: val_loss improved from 1.18902 to 0.94943, saving model to checkpoint_16_loss0.9494.h5
Epoch 17/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8710 - accuracy: 0.7017 - val_loss: 1.2505 - val_accuracy: 0.5758

Epoch 00017: val_loss did not improve from 0.94943
Epoch 18/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8166 - accuracy: 0.7185 - val_loss: 1.8299 - val_accuracy: 0.2949

Epoch 00018: val_loss did not improve from 0.94943
Epoch 19/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8225 - accuracy: 0.7200 - val_loss: 0.9740 - val_accuracy: 0.6784

Epoch 00019: val_loss did not improve from 0.94943
Epoch 20/500
104/104 [==============================] - 3s 27ms/step - loss: 0.7824 - accuracy: 0.7299 - val_loss: 1.1090 - val_accuracy: 0.6320

Epoch 00020: val_loss did not improve from 0.94943
Epoch 21/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8068 - accuracy: 0.7152 - val_loss: 1.0282 - val_accuracy: 0.6517

Epoch 00021: val_loss did not improve from 0.94943
Epoch 22/500
104/104 [==============================] - 3s 27ms/step - loss: 0.8008 - accuracy: 0.7164 - val_loss: 0.8633 - val_accuracy: 0.7107

Epoch 00022: val_loss improved from 0.94943 to 0.86325, saving model to checkpoint_22_loss0.8633.h5
Epoch 23/500
104/104 [==============================] - 3s 27ms/step - loss: 0.7501 - accuracy: 0.7432 - val_loss: 0.9115 - val_accuracy: 0.6826

Epoch 00023: val_loss did not improve from 0.86325
Epoch 24/500
104/104 [==============================] - 3s 27ms/step - loss: 0.7058 - accuracy: 0.7612 - val_loss: 1.3278 - val_accuracy: 0.5070

Epoch 00024: val_loss did not improve from 0.86325
Epoch 25/500
104/104 [==============================] - 3s 27ms/step - loss: 0.7065 - accuracy: 0.7492 - val_loss: 1.2645 - val_accuracy: 0.5435

Epoch 00025: val_loss did not improve from 0.86325
Epoch 26/500
104/104 [==============================] - 3s 27ms/step - loss: 0.6870 - accuracy: 0.7609 - val_loss: 0.8258 - val_accuracy: 0.7360

Epoch 00026: val_loss improved from 0.86325 to 0.82584, saving model to checkpoint_26_loss0.8258.h5
Epoch 27/500
104/104 [==============================] - 3s 27ms/step - loss: 0.6977 - accuracy: 0.7507 - val_loss: 1.3135 - val_accuracy: 0.4705

Epoch 00027: val_loss did not improve from 0.82584
Epoch 28/500
104/104 [==============================] - 3s 27ms/step - loss: 0.7005 - accuracy: 0.7609 - val_loss: 0.9200 - val_accuracy: 0.6910

Epoch 00028: val_loss did not improve from 0.82584
Epoch 29/500
104/104 [==============================] - 3s 28ms/step - loss: 0.6592 - accuracy: 0.7750 - val_loss: 0.8912 - val_accuracy: 0.6938

Epoch 00029: val_loss did not improve from 0.82584
Epoch 30/500
104/104 [==============================] - 3s 27ms/step - loss: 0.6226 - accuracy: 0.7820 - val_loss: 2.0058 - val_accuracy: 0.3287

Epoch 00030: val_loss did not improve from 0.82584
Epoch 31/500
104/104 [==============================] - 3s 27ms/step - loss: 0.6259 - accuracy: 0.7943 - val_loss: 1.2749 - val_accuracy: 0.5576

Epoch 00031: val_loss did not improve from 0.82584
Epoch 32/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5864 - accuracy: 0.7943 - val_loss: 1.3250 - val_accuracy: 0.5098

Epoch 00032: val_loss did not improve from 0.82584
Epoch 33/500
104/104 [==============================] - 3s 28ms/step - loss: 0.5978 - accuracy: 0.7877 - val_loss: 0.7612 - val_accuracy: 0.7346

Epoch 00033: val_loss improved from 0.82584 to 0.76123, saving model to checkpoint_33_loss0.7612.h5
Epoch 34/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5950 - accuracy: 0.7943 - val_loss: 0.7803 - val_accuracy: 0.7247

Epoch 00034: val_loss did not improve from 0.76123
Epoch 35/500
104/104 [==============================] - 3s 28ms/step - loss: 0.5424 - accuracy: 0.8066 - val_loss: 0.8553 - val_accuracy: 0.7163

Epoch 00035: val_loss did not improve from 0.76123
Epoch 36/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5827 - accuracy: 0.8018 - val_loss: 1.3638 - val_accuracy: 0.5267

Epoch 00036: val_loss did not improve from 0.76123
Epoch 37/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5860 - accuracy: 0.8018 - val_loss: 1.4302 - val_accuracy: 0.4761

Epoch 00037: val_loss did not improve from 0.76123
Epoch 38/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5833 - accuracy: 0.7982 - val_loss: 0.9498 - val_accuracy: 0.7022

Epoch 00038: val_loss did not improve from 0.76123
Epoch 39/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5369 - accuracy: 0.8030 - val_loss: 1.5206 - val_accuracy: 0.4846

Epoch 00039: val_loss did not improve from 0.76123
Epoch 40/500
104/104 [==============================] - 3s 28ms/step - loss: 0.5273 - accuracy: 0.8174 - val_loss: 1.4188 - val_accuracy: 0.5169

Epoch 00040: val_loss did not improve from 0.76123
Epoch 41/500
104/104 [==============================] - 3s 27ms/step - loss: 0.5109 - accuracy: 0.8205 - val_loss: 0.8535 - val_accuracy: 0.7107

Epoch 00041: val_loss did not improve from 0.76123
Epoch 42/500
104/104 [==============================] - 3s 28ms/step - loss: 0.5244 - accuracy: 0.8180 - val_loss: 0.9509 - val_accuracy: 0.6461

Epoch 00042: val_loss did not improve from 0.76123
Epoch 43/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4672 - accuracy: 0.8352 - val_loss: 0.7309 - val_accuracy: 0.7289

Epoch 00043: val_loss improved from 0.76123 to 0.73093, saving model to checkpoint_43_loss0.7309.h5
Epoch 44/500
104/104 [==============================] - 3s 28ms/step - loss: 0.5040 - accuracy: 0.8280 - val_loss: 0.5988 - val_accuracy: 0.7963

Epoch 00044: val_loss improved from 0.73093 to 0.59876, saving model to checkpoint_44_loss0.5988.h5
Epoch 45/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4732 - accuracy: 0.8364 - val_loss: 0.8820 - val_accuracy: 0.6826

Epoch 00045: val_loss did not improve from 0.59876
Epoch 46/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4943 - accuracy: 0.8298 - val_loss: 0.8610 - val_accuracy: 0.6685

Epoch 00046: val_loss did not improve from 0.59876
Epoch 47/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4778 - accuracy: 0.8355 - val_loss: 0.8577 - val_accuracy: 0.6854

Epoch 00047: val_loss did not improve from 0.59876
Epoch 48/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4687 - accuracy: 0.8424 - val_loss: 0.7614 - val_accuracy: 0.7303

Epoch 00048: val_loss did not improve from 0.59876
Epoch 49/500
104/104 [==============================] - 3s 29ms/step - loss: 0.4443 - accuracy: 0.8400 - val_loss: 0.6920 - val_accuracy: 0.7584

Epoch 00049: val_loss did not improve from 0.59876
Epoch 50/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4508 - accuracy: 0.8478 - val_loss: 0.6942 - val_accuracy: 0.7598

Epoch 00050: val_loss did not improve from 0.59876
Epoch 51/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4472 - accuracy: 0.8406 - val_loss: 0.7816 - val_accuracy: 0.7121

Epoch 00051: val_loss did not improve from 0.59876
Epoch 52/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4143 - accuracy: 0.8550 - val_loss: 1.0124 - val_accuracy: 0.6475

Epoch 00052: val_loss did not improve from 0.59876
Epoch 53/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4597 - accuracy: 0.8409 - val_loss: 0.7342 - val_accuracy: 0.7430

Epoch 00053: val_loss did not improve from 0.59876
Epoch 54/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4029 - accuracy: 0.8574 - val_loss: 1.4363 - val_accuracy: 0.5351

Epoch 00054: val_loss did not improve from 0.59876
Epoch 55/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4361 - accuracy: 0.8532 - val_loss: 0.6593 - val_accuracy: 0.7753

Epoch 00055: val_loss did not improve from 0.59876
Epoch 56/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3900 - accuracy: 0.8598 - val_loss: 0.4464 - val_accuracy: 0.8694

Epoch 00056: val_loss improved from 0.59876 to 0.44641, saving model to checkpoint_56_loss0.4464.h5
Epoch 57/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4101 - accuracy: 0.8559 - val_loss: 0.5673 - val_accuracy: 0.8048

Epoch 00057: val_loss did not improve from 0.44641
Epoch 58/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4318 - accuracy: 0.8526 - val_loss: 1.9344 - val_accuracy: 0.3258

Epoch 00058: val_loss did not improve from 0.44641
Epoch 59/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4243 - accuracy: 0.8538 - val_loss: 0.8083 - val_accuracy: 0.7233

Epoch 00059: val_loss did not improve from 0.44641
Epoch 60/500
104/104 [==============================] - 3s 28ms/step - loss: 0.4228 - accuracy: 0.8559 - val_loss: 1.2453 - val_accuracy: 0.5983

Epoch 00060: val_loss did not improve from 0.44641
Epoch 61/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3989 - accuracy: 0.8550 - val_loss: 0.4551 - val_accuracy: 0.8329

Epoch 00061: val_loss did not improve from 0.44641
Epoch 62/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3776 - accuracy: 0.8722 - val_loss: 1.3542 - val_accuracy: 0.5562

Epoch 00062: val_loss did not improve from 0.44641
Epoch 63/500
104/104 [==============================] - 3s 27ms/step - loss: 0.3877 - accuracy: 0.8577 - val_loss: 0.6011 - val_accuracy: 0.7907

Epoch 00063: val_loss did not improve from 0.44641
Epoch 64/500
104/104 [==============================] - 3s 27ms/step - loss: 0.4038 - accuracy: 0.8541 - val_loss: 0.3866 - val_accuracy: 0.8708

Epoch 00064: val_loss improved from 0.44641 to 0.38661, saving model to checkpoint_64_loss0.3866.h5
Epoch 65/500
104/104 [==============================] - 3s 27ms/step - loss: 0.3887 - accuracy: 0.8598 - val_loss: 0.4468 - val_accuracy: 0.8455

Epoch 00065: val_loss did not improve from 0.38661
Epoch 66/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3798 - accuracy: 0.8614 - val_loss: 1.5759 - val_accuracy: 0.4719

Epoch 00066: val_loss did not improve from 0.38661
Epoch 67/500
104/104 [==============================] - 3s 27ms/step - loss: 0.3736 - accuracy: 0.8647 - val_loss: 0.8492 - val_accuracy: 0.7331

Epoch 00067: val_loss did not improve from 0.38661
Epoch 68/500
104/104 [==============================] - 3s 27ms/step - loss: 0.3657 - accuracy: 0.8737 - val_loss: 0.4154 - val_accuracy: 0.8848

Epoch 00068: val_loss did not improve from 0.38661
Epoch 69/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3617 - accuracy: 0.8659 - val_loss: 0.5005 - val_accuracy: 0.8441

Epoch 00069: val_loss did not improve from 0.38661
Epoch 70/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3656 - accuracy: 0.8755 - val_loss: 1.8787 - val_accuracy: 0.4663

Epoch 00070: val_loss did not improve from 0.38661
Epoch 71/500
104/104 [==============================] - 3s 29ms/step - loss: 0.3550 - accuracy: 0.8797 - val_loss: 0.4434 - val_accuracy: 0.8666

Epoch 00071: val_loss did not improve from 0.38661
Epoch 72/500
104/104 [==============================] - 3s 27ms/step - loss: 0.3490 - accuracy: 0.8788 - val_loss: 0.7099 - val_accuracy: 0.7514

Epoch 00072: val_loss did not improve from 0.38661
Epoch 73/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3562 - accuracy: 0.8776 - val_loss: 0.6981 - val_accuracy: 0.7767

Epoch 00073: val_loss did not improve from 0.38661
Epoch 74/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3376 - accuracy: 0.8830 - val_loss: 1.1052 - val_accuracy: 0.6334

Epoch 00074: val_loss did not improve from 0.38661
Epoch 75/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3631 - accuracy: 0.8782 - val_loss: 0.4021 - val_accuracy: 0.8778

Epoch 00075: val_loss did not improve from 0.38661
Epoch 76/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3509 - accuracy: 0.8740 - val_loss: 0.5555 - val_accuracy: 0.8090

Epoch 00076: val_loss did not improve from 0.38661
Epoch 77/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3311 - accuracy: 0.8869 - val_loss: 2.2761 - val_accuracy: 0.3890

Epoch 00077: val_loss did not improve from 0.38661
Epoch 78/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3127 - accuracy: 0.8869 - val_loss: 0.9946 - val_accuracy: 0.7149

Epoch 00078: val_loss did not improve from 0.38661
Epoch 79/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3535 - accuracy: 0.8719 - val_loss: 1.1538 - val_accuracy: 0.6124

Epoch 00079: val_loss did not improve from 0.38661
Epoch 80/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3589 - accuracy: 0.8731 - val_loss: 0.5612 - val_accuracy: 0.8287

Epoch 00080: val_loss did not improve from 0.38661
Epoch 81/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3414 - accuracy: 0.8797 - val_loss: 0.3646 - val_accuracy: 0.8764

Epoch 00081: val_loss improved from 0.38661 to 0.36455, saving model to checkpoint_81_loss0.3646.h5
Epoch 82/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3221 - accuracy: 0.8806 - val_loss: 0.6056 - val_accuracy: 0.7823

Epoch 00082: val_loss did not improve from 0.36455
Epoch 83/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3080 - accuracy: 0.8923 - val_loss: 0.8531 - val_accuracy: 0.7205

Epoch 00083: val_loss did not improve from 0.36455
Epoch 84/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3479 - accuracy: 0.8794 - val_loss: 0.7649 - val_accuracy: 0.7683

Epoch 00084: val_loss did not improve from 0.36455
Epoch 85/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3331 - accuracy: 0.8869 - val_loss: 0.3584 - val_accuracy: 0.8919

Epoch 00085: val_loss improved from 0.36455 to 0.35840, saving model to checkpoint_85_loss0.3584.h5
Epoch 86/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3326 - accuracy: 0.8839 - val_loss: 0.5154 - val_accuracy: 0.8188

Epoch 00086: val_loss did not improve from 0.35840
Epoch 87/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3265 - accuracy: 0.8788 - val_loss: 0.4747 - val_accuracy: 0.8202

Epoch 00087: val_loss did not improve from 0.35840
Epoch 88/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3055 - accuracy: 0.9002 - val_loss: 0.8678 - val_accuracy: 0.7233

Epoch 00088: val_loss did not improve from 0.35840
Epoch 89/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3042 - accuracy: 0.8875 - val_loss: 0.8971 - val_accuracy: 0.6896

Epoch 00089: val_loss did not improve from 0.35840
Epoch 90/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3025 - accuracy: 0.8971 - val_loss: 1.1448 - val_accuracy: 0.6545

Epoch 00090: val_loss did not improve from 0.35840
Epoch 91/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2775 - accuracy: 0.9029 - val_loss: 1.1401 - val_accuracy: 0.6742

Epoch 00091: val_loss did not improve from 0.35840
Epoch 92/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2944 - accuracy: 0.8905 - val_loss: 0.4901 - val_accuracy: 0.8287

Epoch 00092: val_loss did not improve from 0.35840
Epoch 93/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2985 - accuracy: 0.8899 - val_loss: 0.8455 - val_accuracy: 0.6924

Epoch 00093: val_loss did not improve from 0.35840
Epoch 94/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2879 - accuracy: 0.8986 - val_loss: 0.8767 - val_accuracy: 0.7008

Epoch 00094: val_loss did not improve from 0.35840
Epoch 95/500
104/104 [==============================] - 3s 28ms/step - loss: 0.3111 - accuracy: 0.8989 - val_loss: 0.5017 - val_accuracy: 0.8511

Epoch 00095: val_loss did not improve from 0.35840
Epoch 96/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2938 - accuracy: 0.9035 - val_loss: 0.6139 - val_accuracy: 0.7781

Epoch 00096: val_loss did not improve from 0.35840
Epoch 97/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2972 - accuracy: 0.8920 - val_loss: 1.3138 - val_accuracy: 0.6110

Epoch 00097: val_loss did not improve from 0.35840
Epoch 98/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2696 - accuracy: 0.9041 - val_loss: 0.9026 - val_accuracy: 0.7191

Epoch 00098: val_loss did not improve from 0.35840
Epoch 99/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2749 - accuracy: 0.8998 - val_loss: 0.4231 - val_accuracy: 0.8511

Epoch 00099: val_loss did not improve from 0.35840
Epoch 100/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2813 - accuracy: 0.9032 - val_loss: 0.4161 - val_accuracy: 0.8638

Epoch 00100: val_loss did not improve from 0.35840
Epoch 101/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2839 - accuracy: 0.9047 - val_loss: 0.4303 - val_accuracy: 0.8581

Epoch 00101: val_loss did not improve from 0.35840
Epoch 102/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2712 - accuracy: 0.9086 - val_loss: 0.8433 - val_accuracy: 0.7233

Epoch 00102: val_loss did not improve from 0.35840
Epoch 103/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2790 - accuracy: 0.8977 - val_loss: 0.8272 - val_accuracy: 0.7654

Epoch 00103: val_loss did not improve from 0.35840
Epoch 104/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2512 - accuracy: 0.9074 - val_loss: 0.4882 - val_accuracy: 0.8287

Epoch 00104: val_loss did not improve from 0.35840
Epoch 105/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2843 - accuracy: 0.9059 - val_loss: 0.8630 - val_accuracy: 0.6854

Epoch 00105: val_loss did not improve from 0.35840
Epoch 106/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2782 - accuracy: 0.9002 - val_loss: 0.8379 - val_accuracy: 0.6938

Epoch 00106: val_loss did not improve from 0.35840
Epoch 107/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2719 - accuracy: 0.9050 - val_loss: 0.9612 - val_accuracy: 0.6966

Epoch 00107: val_loss did not improve from 0.35840
Epoch 108/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2612 - accuracy: 0.9062 - val_loss: 0.3543 - val_accuracy: 0.8975

Epoch 00108: val_loss improved from 0.35840 to 0.35429, saving model to checkpoint_108_loss0.3543.h5
Epoch 109/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2850 - accuracy: 0.9062 - val_loss: 0.4836 - val_accuracy: 0.8399

Epoch 00109: val_loss did not improve from 0.35429
Epoch 110/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2599 - accuracy: 0.9077 - val_loss: 0.9951 - val_accuracy: 0.7275

Epoch 00110: val_loss did not improve from 0.35429
Epoch 111/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2739 - accuracy: 0.8995 - val_loss: 0.5754 - val_accuracy: 0.8062

Epoch 00111: val_loss did not improve from 0.35429
Epoch 112/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2691 - accuracy: 0.9050 - val_loss: 1.1200 - val_accuracy: 0.6222

Epoch 00112: val_loss did not improve from 0.35429
Epoch 113/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2557 - accuracy: 0.9146 - val_loss: 0.3584 - val_accuracy: 0.8834

Epoch 00113: val_loss did not improve from 0.35429
Epoch 114/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2535 - accuracy: 0.9107 - val_loss: 0.6022 - val_accuracy: 0.8062

Epoch 00114: val_loss did not improve from 0.35429
Epoch 115/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2490 - accuracy: 0.9158 - val_loss: 0.5557 - val_accuracy: 0.8006

Epoch 00115: val_loss did not improve from 0.35429
Epoch 116/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2574 - accuracy: 0.9092 - val_loss: 0.8962 - val_accuracy: 0.7500

Epoch 00116: val_loss did not improve from 0.35429
Epoch 117/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2663 - accuracy: 0.9128 - val_loss: 0.3793 - val_accuracy: 0.8750

Epoch 00117: val_loss did not improve from 0.35429
Epoch 118/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2719 - accuracy: 0.9143 - val_loss: 0.5611 - val_accuracy: 0.8258

Epoch 00118: val_loss did not improve from 0.35429
Epoch 119/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2708 - accuracy: 0.9098 - val_loss: 0.3655 - val_accuracy: 0.8961

Epoch 00119: val_loss did not improve from 0.35429
Epoch 120/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2532 - accuracy: 0.9137 - val_loss: 0.3462 - val_accuracy: 0.8919

Epoch 00120: val_loss improved from 0.35429 to 0.34624, saving model to checkpoint_120_loss0.3462.h5
Epoch 121/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2531 - accuracy: 0.9092 - val_loss: 0.3214 - val_accuracy: 0.8862

Epoch 00121: val_loss improved from 0.34624 to 0.32143, saving model to checkpoint_121_loss0.3214.h5
Epoch 122/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2678 - accuracy: 0.9062 - val_loss: 0.5804 - val_accuracy: 0.8132

Epoch 00122: val_loss did not improve from 0.32143
Epoch 123/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2321 - accuracy: 0.9149 - val_loss: 0.3332 - val_accuracy: 0.8919

Epoch 00123: val_loss did not improve from 0.32143
Epoch 124/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2568 - accuracy: 0.9119 - val_loss: 0.9064 - val_accuracy: 0.6994

Epoch 00124: val_loss did not improve from 0.32143
Epoch 125/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2288 - accuracy: 0.9152 - val_loss: 0.6427 - val_accuracy: 0.7837

Epoch 00125: val_loss did not improve from 0.32143
Epoch 126/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2629 - accuracy: 0.9068 - val_loss: 0.2990 - val_accuracy: 0.9087

Epoch 00126: val_loss improved from 0.32143 to 0.29899, saving model to checkpoint_126_loss0.2990.h5
Epoch 127/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2288 - accuracy: 0.9206 - val_loss: 1.7818 - val_accuracy: 0.5211

Epoch 00127: val_loss did not improve from 0.29899
Epoch 128/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2380 - accuracy: 0.9191 - val_loss: 0.5805 - val_accuracy: 0.8258

Epoch 00128: val_loss did not improve from 0.29899
Epoch 129/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2524 - accuracy: 0.9122 - val_loss: 0.3414 - val_accuracy: 0.8876

Epoch 00129: val_loss did not improve from 0.29899
Epoch 130/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2300 - accuracy: 0.9200 - val_loss: 0.3161 - val_accuracy: 0.9017

Epoch 00130: val_loss did not improve from 0.29899
Epoch 131/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2131 - accuracy: 0.9239 - val_loss: 1.0363 - val_accuracy: 0.6938

Epoch 00131: val_loss did not improve from 0.29899
Epoch 132/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2402 - accuracy: 0.9188 - val_loss: 0.3615 - val_accuracy: 0.8904

Epoch 00132: val_loss did not improve from 0.29899
Epoch 133/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2411 - accuracy: 0.9212 - val_loss: 0.8001 - val_accuracy: 0.7711

Epoch 00133: val_loss did not improve from 0.29899
Epoch 134/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2286 - accuracy: 0.9242 - val_loss: 0.4637 - val_accuracy: 0.8567

Epoch 00134: val_loss did not improve from 0.29899
Epoch 135/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2344 - accuracy: 0.9188 - val_loss: 0.6373 - val_accuracy: 0.7837

Epoch 00135: val_loss did not improve from 0.29899
Epoch 136/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2308 - accuracy: 0.9233 - val_loss: 0.4202 - val_accuracy: 0.8750

Epoch 00136: val_loss did not improve from 0.29899
Epoch 137/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2241 - accuracy: 0.9197 - val_loss: 0.3382 - val_accuracy: 0.8876

Epoch 00137: val_loss did not improve from 0.29899
Epoch 138/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2084 - accuracy: 0.9263 - val_loss: 0.3832 - val_accuracy: 0.8848

Epoch 00138: val_loss did not improve from 0.29899
Epoch 139/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2199 - accuracy: 0.9203 - val_loss: 0.3208 - val_accuracy: 0.9073

Epoch 00139: val_loss did not improve from 0.29899
Epoch 140/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2127 - accuracy: 0.9236 - val_loss: 0.3928 - val_accuracy: 0.8778

Epoch 00140: val_loss did not improve from 0.29899
Epoch 141/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2135 - accuracy: 0.9293 - val_loss: 0.6525 - val_accuracy: 0.7949

Epoch 00141: val_loss did not improve from 0.29899
Epoch 142/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2319 - accuracy: 0.9167 - val_loss: 0.4119 - val_accuracy: 0.8722

Epoch 00142: val_loss did not improve from 0.29899
Epoch 143/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2409 - accuracy: 0.9149 - val_loss: 0.3095 - val_accuracy: 0.8961

Epoch 00143: val_loss did not improve from 0.29899
Epoch 144/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2220 - accuracy: 0.9215 - val_loss: 0.6125 - val_accuracy: 0.8160

Epoch 00144: val_loss did not improve from 0.29899
Epoch 145/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2416 - accuracy: 0.9242 - val_loss: 0.4668 - val_accuracy: 0.8497

Epoch 00145: val_loss did not improve from 0.29899
Epoch 146/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2202 - accuracy: 0.9278 - val_loss: 2.7021 - val_accuracy: 0.4312

Epoch 00146: val_loss did not improve from 0.29899
Epoch 147/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2252 - accuracy: 0.9212 - val_loss: 0.3755 - val_accuracy: 0.8876

Epoch 00147: val_loss did not improve from 0.29899
Epoch 148/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2158 - accuracy: 0.9227 - val_loss: 0.5224 - val_accuracy: 0.8399

Epoch 00148: val_loss did not improve from 0.29899
Epoch 149/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2054 - accuracy: 0.9290 - val_loss: 0.6407 - val_accuracy: 0.7949

Epoch 00149: val_loss did not improve from 0.29899
Epoch 150/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2101 - accuracy: 0.9296 - val_loss: 0.2727 - val_accuracy: 0.9115

Epoch 00150: val_loss improved from 0.29899 to 0.27266, saving model to checkpoint_150_loss0.2727.h5
Epoch 151/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2098 - accuracy: 0.9305 - val_loss: 0.5419 - val_accuracy: 0.8174

Epoch 00151: val_loss did not improve from 0.27266
Epoch 152/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2136 - accuracy: 0.9251 - val_loss: 1.1923 - val_accuracy: 0.6278

Epoch 00152: val_loss did not improve from 0.27266
Epoch 153/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2122 - accuracy: 0.9269 - val_loss: 0.3897 - val_accuracy: 0.8820

Epoch 00153: val_loss did not improve from 0.27266
Epoch 154/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1993 - accuracy: 0.9299 - val_loss: 0.3577 - val_accuracy: 0.9017

Epoch 00154: val_loss did not improve from 0.27266
Epoch 155/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2072 - accuracy: 0.9221 - val_loss: 0.4387 - val_accuracy: 0.8750

Epoch 00155: val_loss did not improve from 0.27266
Epoch 156/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2226 - accuracy: 0.9278 - val_loss: 0.8477 - val_accuracy: 0.7626

Epoch 00156: val_loss did not improve from 0.27266
Epoch 157/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2198 - accuracy: 0.9215 - val_loss: 0.2863 - val_accuracy: 0.9059

Epoch 00157: val_loss did not improve from 0.27266
Epoch 158/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2013 - accuracy: 0.9305 - val_loss: 0.5395 - val_accuracy: 0.8118

Epoch 00158: val_loss did not improve from 0.27266
Epoch 159/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1811 - accuracy: 0.9329 - val_loss: 0.3040 - val_accuracy: 0.8947

Epoch 00159: val_loss did not improve from 0.27266
Epoch 160/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1947 - accuracy: 0.9341 - val_loss: 0.3955 - val_accuracy: 0.8596

Epoch 00160: val_loss did not improve from 0.27266
Epoch 161/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1939 - accuracy: 0.9335 - val_loss: 0.4262 - val_accuracy: 0.8539

Epoch 00161: val_loss did not improve from 0.27266
Epoch 162/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2104 - accuracy: 0.9248 - val_loss: 0.4240 - val_accuracy: 0.8834

Epoch 00162: val_loss did not improve from 0.27266
Epoch 163/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2135 - accuracy: 0.9257 - val_loss: 0.4295 - val_accuracy: 0.8610

Epoch 00163: val_loss did not improve from 0.27266
Epoch 164/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1877 - accuracy: 0.9356 - val_loss: 0.3396 - val_accuracy: 0.9073

Epoch 00164: val_loss did not improve from 0.27266
Epoch 165/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2001 - accuracy: 0.9242 - val_loss: 0.3454 - val_accuracy: 0.8834

Epoch 00165: val_loss did not improve from 0.27266
Epoch 166/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2057 - accuracy: 0.9236 - val_loss: 0.7384 - val_accuracy: 0.7879

Epoch 00166: val_loss did not improve from 0.27266
Epoch 167/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1835 - accuracy: 0.9341 - val_loss: 0.4068 - val_accuracy: 0.8638

Epoch 00167: val_loss did not improve from 0.27266
Epoch 168/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1818 - accuracy: 0.9359 - val_loss: 0.8876 - val_accuracy: 0.6924

Epoch 00168: val_loss did not improve from 0.27266
Epoch 169/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2000 - accuracy: 0.9305 - val_loss: 0.4922 - val_accuracy: 0.8357

Epoch 00169: val_loss did not improve from 0.27266
Epoch 170/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2349 - accuracy: 0.9209 - val_loss: 1.0188 - val_accuracy: 0.7570

Epoch 00170: val_loss did not improve from 0.27266
Epoch 171/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1940 - accuracy: 0.9329 - val_loss: 1.0850 - val_accuracy: 0.6952

Epoch 00171: val_loss did not improve from 0.27266
Epoch 172/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2036 - accuracy: 0.9317 - val_loss: 0.3779 - val_accuracy: 0.8778

Epoch 00172: val_loss did not improve from 0.27266
Epoch 173/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1949 - accuracy: 0.9314 - val_loss: 0.3023 - val_accuracy: 0.9045

Epoch 00173: val_loss did not improve from 0.27266
Epoch 174/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1922 - accuracy: 0.9338 - val_loss: 0.3479 - val_accuracy: 0.8904

Epoch 00174: val_loss did not improve from 0.27266
Epoch 175/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1825 - accuracy: 0.9365 - val_loss: 0.5632 - val_accuracy: 0.8258

Epoch 00175: val_loss did not improve from 0.27266
Epoch 176/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1816 - accuracy: 0.9398 - val_loss: 0.3431 - val_accuracy: 0.8708

Epoch 00176: val_loss did not improve from 0.27266
Epoch 177/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2190 - accuracy: 0.9254 - val_loss: 1.1825 - val_accuracy: 0.6362

Epoch 00177: val_loss did not improve from 0.27266
Epoch 178/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1690 - accuracy: 0.9450 - val_loss: 0.5972 - val_accuracy: 0.8385

Epoch 00178: val_loss did not improve from 0.27266
Epoch 179/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2059 - accuracy: 0.9308 - val_loss: 0.2787 - val_accuracy: 0.9213

Epoch 00179: val_loss did not improve from 0.27266
Epoch 180/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1872 - accuracy: 0.9305 - val_loss: 1.0652 - val_accuracy: 0.7402

Epoch 00180: val_loss did not improve from 0.27266
Epoch 181/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1887 - accuracy: 0.9383 - val_loss: 1.0379 - val_accuracy: 0.6910

Epoch 00181: val_loss did not improve from 0.27266
Epoch 182/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1902 - accuracy: 0.9329 - val_loss: 0.4278 - val_accuracy: 0.8722

Epoch 00182: val_loss did not improve from 0.27266
Epoch 183/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1674 - accuracy: 0.9432 - val_loss: 0.5100 - val_accuracy: 0.8399

Epoch 00183: val_loss did not improve from 0.27266
Epoch 184/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2077 - accuracy: 0.9293 - val_loss: 0.5185 - val_accuracy: 0.8216

Epoch 00184: val_loss did not improve from 0.27266
Epoch 185/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2048 - accuracy: 0.9338 - val_loss: 0.2866 - val_accuracy: 0.9157

Epoch 00185: val_loss did not improve from 0.27266
Epoch 186/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1762 - accuracy: 0.9371 - val_loss: 0.6105 - val_accuracy: 0.7907

Epoch 00186: val_loss did not improve from 0.27266
Epoch 187/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1923 - accuracy: 0.9347 - val_loss: 0.6512 - val_accuracy: 0.8132

Epoch 00187: val_loss did not improve from 0.27266
Epoch 188/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2043 - accuracy: 0.9305 - val_loss: 0.9260 - val_accuracy: 0.7458

Epoch 00188: val_loss did not improve from 0.27266
Epoch 189/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1806 - accuracy: 0.9377 - val_loss: 0.6275 - val_accuracy: 0.8076

Epoch 00189: val_loss did not improve from 0.27266
Epoch 190/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1742 - accuracy: 0.9395 - val_loss: 0.3065 - val_accuracy: 0.8975

Epoch 00190: val_loss did not improve from 0.27266
Epoch 191/500
104/104 [==============================] - 3s 30ms/step - loss: 0.1643 - accuracy: 0.9405 - val_loss: 0.9320 - val_accuracy: 0.7219

Epoch 00191: val_loss did not improve from 0.27266
Epoch 192/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1827 - accuracy: 0.9414 - val_loss: 0.2681 - val_accuracy: 0.9185

Epoch 00192: val_loss improved from 0.27266 to 0.26806, saving model to checkpoint_192_loss0.2681.h5
Epoch 193/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1798 - accuracy: 0.9365 - val_loss: 0.2956 - val_accuracy: 0.9143

Epoch 00193: val_loss did not improve from 0.26806
Epoch 194/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1597 - accuracy: 0.9429 - val_loss: 0.2893 - val_accuracy: 0.9087

Epoch 00194: val_loss did not improve from 0.26806
Epoch 195/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1583 - accuracy: 0.9432 - val_loss: 0.3592 - val_accuracy: 0.8848

Epoch 00195: val_loss did not improve from 0.26806
Epoch 196/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1830 - accuracy: 0.9353 - val_loss: 0.7300 - val_accuracy: 0.7472

Epoch 00196: val_loss did not improve from 0.26806
Epoch 197/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1903 - accuracy: 0.9350 - val_loss: 0.3798 - val_accuracy: 0.8947

Epoch 00197: val_loss did not improve from 0.26806
Epoch 198/500
104/104 [==============================] - 3s 29ms/step - loss: 0.2021 - accuracy: 0.9317 - val_loss: 1.2837 - val_accuracy: 0.6292

Epoch 00198: val_loss did not improve from 0.26806
Epoch 199/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1872 - accuracy: 0.9320 - val_loss: 0.4106 - val_accuracy: 0.8525

Epoch 00199: val_loss did not improve from 0.26806
Epoch 200/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1576 - accuracy: 0.9426 - val_loss: 0.9334 - val_accuracy: 0.7809

Epoch 00200: val_loss did not improve from 0.26806
Epoch 201/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1603 - accuracy: 0.9462 - val_loss: 0.5375 - val_accuracy: 0.8469

Epoch 00201: val_loss did not improve from 0.26806
Epoch 202/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1940 - accuracy: 0.9392 - val_loss: 0.4254 - val_accuracy: 0.8975

Epoch 00202: val_loss did not improve from 0.26806
Epoch 203/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1655 - accuracy: 0.9432 - val_loss: 2.3291 - val_accuracy: 0.5239

Epoch 00203: val_loss did not improve from 0.26806
Epoch 204/500
104/104 [==============================] - 3s 28ms/step - loss: 0.1983 - accuracy: 0.9326 - val_loss: 0.4199 - val_accuracy: 0.8904

Epoch 00204: val_loss did not improve from 0.26806
Epoch 205/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1707 - accuracy: 0.9417 - val_loss: 0.2768 - val_accuracy: 0.9199

Epoch 00205: val_loss did not improve from 0.26806
Epoch 206/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1666 - accuracy: 0.9420 - val_loss: 0.5247 - val_accuracy: 0.8638

Epoch 00206: val_loss did not improve from 0.26806
Epoch 207/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1460 - accuracy: 0.9468 - val_loss: 0.3017 - val_accuracy: 0.9045

Epoch 00207: val_loss did not improve from 0.26806
Epoch 208/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1533 - accuracy: 0.9435 - val_loss: 0.3077 - val_accuracy: 0.9059

Epoch 00208: val_loss did not improve from 0.26806
Epoch 209/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1654 - accuracy: 0.9495 - val_loss: 0.2838 - val_accuracy: 0.8989

Epoch 00209: val_loss did not improve from 0.26806
Epoch 210/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1635 - accuracy: 0.9411 - val_loss: 0.3121 - val_accuracy: 0.9143

Epoch 00210: val_loss did not improve from 0.26806
Epoch 211/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1681 - accuracy: 0.9426 - val_loss: 0.4535 - val_accuracy: 0.8680

Epoch 00211: val_loss did not improve from 0.26806
Epoch 212/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1840 - accuracy: 0.9408 - val_loss: 0.7510 - val_accuracy: 0.7528

Epoch 00212: val_loss did not improve from 0.26806
Epoch 213/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1747 - accuracy: 0.9380 - val_loss: 0.6655 - val_accuracy: 0.7879

Epoch 00213: val_loss did not improve from 0.26806
Epoch 214/500
104/104 [==============================] - 3s 28ms/step - loss: 0.2059 - accuracy: 0.9302 - val_loss: 0.3420 - val_accuracy: 0.8890

Epoch 00214: val_loss did not improve from 0.26806
Epoch 215/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1639 - accuracy: 0.9386 - val_loss: 0.4833 - val_accuracy: 0.8511

Epoch 00215: val_loss did not improve from 0.26806
Epoch 216/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1763 - accuracy: 0.9417 - val_loss: 0.9017 - val_accuracy: 0.7486

Epoch 00216: val_loss did not improve from 0.26806
Epoch 217/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1548 - accuracy: 0.9480 - val_loss: 0.4260 - val_accuracy: 0.8610

Epoch 00217: val_loss did not improve from 0.26806
Epoch 218/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1476 - accuracy: 0.9486 - val_loss: 0.6741 - val_accuracy: 0.8258

Epoch 00218: val_loss did not improve from 0.26806
Epoch 219/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1620 - accuracy: 0.9495 - val_loss: 0.4864 - val_accuracy: 0.8736

Epoch 00219: val_loss did not improve from 0.26806
Epoch 220/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1504 - accuracy: 0.9495 - val_loss: 0.6128 - val_accuracy: 0.8160

Epoch 00220: val_loss did not improve from 0.26806
Epoch 221/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1780 - accuracy: 0.9438 - val_loss: 0.5788 - val_accuracy: 0.8413

Epoch 00221: val_loss did not improve from 0.26806
Epoch 222/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1797 - accuracy: 0.9411 - val_loss: 0.4146 - val_accuracy: 0.8708

Epoch 00222: val_loss did not improve from 0.26806
Epoch 223/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1520 - accuracy: 0.9468 - val_loss: 0.4719 - val_accuracy: 0.8778

Epoch 00223: val_loss did not improve from 0.26806
Epoch 224/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1556 - accuracy: 0.9480 - val_loss: 0.4693 - val_accuracy: 0.8848

Epoch 00224: val_loss did not improve from 0.26806
Epoch 225/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1622 - accuracy: 0.9477 - val_loss: 0.4195 - val_accuracy: 0.8736

Epoch 00225: val_loss did not improve from 0.26806
Epoch 226/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1649 - accuracy: 0.9441 - val_loss: 0.9015 - val_accuracy: 0.7472

Epoch 00226: val_loss did not improve from 0.26806
Epoch 227/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1490 - accuracy: 0.9453 - val_loss: 0.5610 - val_accuracy: 0.8497

Epoch 00227: val_loss did not improve from 0.26806
Epoch 228/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1542 - accuracy: 0.9468 - val_loss: 0.2889 - val_accuracy: 0.9157

Epoch 00228: val_loss did not improve from 0.26806
Epoch 229/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1567 - accuracy: 0.9477 - val_loss: 0.5529 - val_accuracy: 0.8272

Epoch 00229: val_loss did not improve from 0.26806
Epoch 230/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1475 - accuracy: 0.9489 - val_loss: 0.9554 - val_accuracy: 0.7458

Epoch 00230: val_loss did not improve from 0.26806
Epoch 231/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1634 - accuracy: 0.9453 - val_loss: 0.5784 - val_accuracy: 0.8441

Epoch 00231: val_loss did not improve from 0.26806
Epoch 232/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1629 - accuracy: 0.9420 - val_loss: 0.3014 - val_accuracy: 0.9199

Epoch 00232: val_loss did not improve from 0.26806
Epoch 233/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1434 - accuracy: 0.9531 - val_loss: 0.2895 - val_accuracy: 0.9213

Epoch 00233: val_loss did not improve from 0.26806
Epoch 234/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1524 - accuracy: 0.9495 - val_loss: 0.3490 - val_accuracy: 0.9031

Epoch 00234: val_loss did not improve from 0.26806
Epoch 235/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1607 - accuracy: 0.9480 - val_loss: 0.3049 - val_accuracy: 0.9213

Epoch 00235: val_loss did not improve from 0.26806
Epoch 236/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1373 - accuracy: 0.9531 - val_loss: 0.3686 - val_accuracy: 0.9031

Epoch 00236: val_loss did not improve from 0.26806
Epoch 237/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1301 - accuracy: 0.9531 - val_loss: 0.3845 - val_accuracy: 0.8933

Epoch 00237: val_loss did not improve from 0.26806
Epoch 238/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1296 - accuracy: 0.9516 - val_loss: 0.5640 - val_accuracy: 0.8581

Epoch 00238: val_loss did not improve from 0.26806
Epoch 239/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1523 - accuracy: 0.9483 - val_loss: 0.3914 - val_accuracy: 0.9045

Epoch 00239: val_loss did not improve from 0.26806
Epoch 240/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1633 - accuracy: 0.9510 - val_loss: 0.3718 - val_accuracy: 0.9101

Epoch 00240: val_loss did not improve from 0.26806
Epoch 241/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1498 - accuracy: 0.9519 - val_loss: 0.4552 - val_accuracy: 0.8792

Epoch 00241: val_loss did not improve from 0.26806
Epoch 242/500
104/104 [==============================] - 3s 29ms/step - loss: 0.1675 - accuracy: 0.9435 - val_loss: 1.2180 - val_accuracy: 0.7317

Epoch 00242: val_loss did not improve from 0.26806
In [ ]:
# Load the weights that gave the best performance on the validation set
model.load_weights('./checkpoint_192_loss0.2681.h5')
In [ ]:
# Evaluate the trained model on the held-out test set.
scores = model.evaluate(X_test, y_test, verbose=1)
print('Test loss:', np.round(scores[0], 4))
print('Test accuracy:', np.round(scores[1], 4))
23/23 [==============================] - 0s 11ms/step - loss: 0.2533 - accuracy: 0.9341
Test loss: 0.2533
Test accuracy: 0.9341
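The accuracy that `model.evaluate` reports can also be recomputed from raw predictions by comparing argmax indices. A minimal numpy sketch with toy arrays (the 4-sample, 3-class data is an assumption for illustration):

```python
import numpy as np

# Toy one-hot labels and predicted probabilities (4 samples, 3 classes)
y_true = np.eye(3)[[0, 1, 2, 1]]
y_prob = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.6, 0.3, 0.1]])  # last sample is misclassified

# Accuracy = fraction of samples where the predicted argmax matches the true argmax
acc = (y_prob.argmax(axis=1) == y_true.argmax(axis=1)).mean()
print(acc)  # 0.75
```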
In [ ]:
# Predicted class probabilities for the test set
y_pred = model.predict(X_test)
In [ ]:
# Confusion matrix
cm = confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True, fmt='d')  # fmt='d' displays integer counts instead of scientific notation
plt.title('Confusion Matrix', fontsize=20)
plt.xlabel('Predicted class', fontsize=15)
plt.ylabel('True class', fontsize=15);
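For reference, the matrix that `confusion_matrix` returns can be built by hand from one-hot labels, which makes the argmax decoding explicit. A minimal numpy sketch (the 3-class, 6-sample toy data is an assumption):

```python
import numpy as np

# Toy one-hot ground truth and probability-like predictions (3 classes, 6 samples)
y_test_oh = np.eye(3)[[0, 1, 2, 0, 1, 2]]
y_pred_prob = np.eye(3)[[0, 1, 2, 0, 2, 2]] * 0.9 + 0.05  # sample 4 misclassified

# argmax decodes one-hot / probability vectors into class indices
true_idx = y_test_oh.argmax(axis=1)
pred_idx = y_pred_prob.argmax(axis=1)

# Accumulate counts: rows = true class, columns = predicted class
cm = np.zeros((3, 3), dtype=int)
for t, p in zip(true_idx, pred_idx):
    cm[t, p] += 1

print(cm)
```

Diagonal entries count correct predictions; off-diagonal entry `cm[t, p]` counts samples of true class `t` predicted as class `p`.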
In [ ]:
print(classification_report(y_test.argmax(axis=1), y_pred.argmax(axis=1)))
              precision    recall  f1-score   support

           0       0.96      0.96      0.96        81
           1       0.94      0.96      0.95        68
           2       0.87      0.90      0.89        30
           3       0.97      0.94      0.96        34
           4       1.00      0.98      0.99        95
           5       0.93      0.93      0.93        56
           6       0.85      1.00      0.92        33
           7       0.97      1.00      0.98        83
           8       0.98      0.91      0.95        70
           9       1.00      0.88      0.93        32
          10       0.82      0.72      0.77        43
          11       0.85      0.91      0.88        88

    accuracy                           0.93       713
   macro avg       0.93      0.92      0.92       713
weighted avg       0.94      0.93      0.93       713
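The f1-score column in the report is the harmonic mean of precision and recall, which is why a class with a large precision/recall gap scores noticeably lower. A quick check against the class 10 row above:

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Class 10: precision 0.82, recall 0.72 -> F1 of roughly 0.77, matching the report.
print(round(f1(0.82, 0.72), 2))  # 0.77
```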

  • The model achieved an accuracy of 93.4% on the test set.
  • Performance is best on class 4 (Common Chickweed, F1 = 0.99) and worst on class 10 (Black-grass, F1 = 0.77).
  • This gap between classes is partly explained by class imbalance: the dataset contains 611 images of Common Chickweed but only 263 of Black-grass.
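One common mitigation for such imbalance is to weight the loss inversely to class frequency; scikit-learn can compute these weights, and Keras accepts them via `model.fit(..., class_weight=...)`. A hedged sketch on toy labels (not the actual dataset counts):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy imbalanced labels: class 0 appears 6 times, class 1 only twice.
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# 'balanced' weights each class by n_samples / (n_classes * class_count).
weights = compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=y)
class_weight = dict(zip([0, 1], weights))
print(class_weight)  # the rarer class receives the larger weight
```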

Model's predictions

In [ ]:
# Reverse the dictionary of labels
rev_label_dict = {v:k for k,v in label_dict.items()}
rev_label_dict
Out[ ]:
{0: 'Small-flowered Cranesbill',
 1: 'Fat Hen',
 2: 'Shepherds Purse',
 3: 'Common wheat',
 4: 'Common Chickweed',
 5: 'Charlock',
 6: 'Cleavers',
 7: 'Scentless Mayweed',
 8: 'Sugar beet',
 9: 'Maize',
 10: 'Black-grass',
 11: 'Loose Silky-bent'}
In [ ]:
# Checking the model's predictions for a few test samples
for i in [2, 3, 33, 36, 59]:
    print(f'Model prediction for Test sample {i}:')
    pred = model.predict(X_test[i].reshape(1, 128, 128, 3))
    print('Predicted class:', rev_label_dict[pred.argmax()])
    print('True class:', rev_label_dict[y_test[i].argmax()])
    plt.imshow(X_test[i])
    plt.show()
    print(50 * '-')
Model prediction for Test sample 2:
Predicted class: Loose Silky-bent
True class: Loose Silky-bent
--------------------------------------------------
Model prediction for Test sample 3:
Predicted class: Fat Hen
True class: Fat Hen
--------------------------------------------------
Model prediction for Test sample 33:
Predicted class: Small-flowered Cranesbill
True class: Small-flowered Cranesbill
--------------------------------------------------
Model prediction for Test sample 36:
Predicted class: Black-grass
True class: Black-grass
--------------------------------------------------
Model prediction for Test sample 59:
Predicted class: Charlock
True class: Charlock
--------------------------------------------------